    Paesaggi pastorali nella montagna veneta: archeologia ed etnoarcheologia [Pastoral landscapes in the mountains of Veneto: archaeology and ethnoarchaeology]

    The Italian Prealps are a natural passage between the Alpine world and the Po plain. The highlands have always been easily reached from the Po plain, especially in the east, thanks to the gentle mountain ridges and the deep north-south valleys. The eastern Italian Prealps have therefore been exploited from historical times to the present day for many purposes that are typical of a mountain zone: in the area between Lake Garda and the Brenta river, which is the focus of this paper, charcoal was made from wood, which was also used as building material; a poor mountain agriculture was practised; mining activities were carried out; stone quarry workers and stone dressers left the remains of open quarries; and during the 18th and 19th centuries the production of ice became important. The most important activity in the uplands, however, was stock-raising: from the 10th century onwards, shepherds have used the high pastures, crossing the almost impassable woodland belt, and during the 18th and 19th centuries cattle husbandry prevailed over sheep rearing. Ethnoarchaeological and archaeological projects have been carried out in the study area in order to detect and document the traces of human activities, especially those of shepherds and sheep farming. To date, it has been possible to locate hundreds of sheepfolds, shepherds’ shelters and breeders’ houses in the uplands; to interview the last shepherds in the lowlands, whose activity turns out to be complementary to the use of the uplands for animal breeding; and to discover that the most ancient traces of organized human exploitation of the uplands go back to the Bronze Age, while during the Iron Age a change in the upland economy is evident, possibly connected with the organization of larger territorial communities and their boundaries; in Roman times the exploitation of the uplands seems to be connected with a network of sanctuaries inherited from the Iron Age.

    Integrating Scale Out and Fault Tolerance in Stream Processing using Operator State Management

    As users of big data applications expect fresh results, we witness a new breed of stream processing systems (SPS) that are designed to scale to large numbers of cloud-hosted machines. Such systems face new challenges: (i) to benefit from the pay-as-you-go model of cloud computing, they must scale out on demand, acquiring additional virtual machines (VMs) and parallelising operators when the workload increases; (ii) failures are common with deployments on hundreds of VMs - systems must be fault-tolerant with fast recovery times, yet low per-machine overheads. An open question is how to achieve these two goals when stream queries include stateful operators, which must be scaled out and recovered without affecting query results. Our key idea is to expose internal operator state explicitly to the SPS through a set of state management primitives. Based on them, we describe an integrated approach for dynamic scale out and recovery of stateful operators. Externalised operator state is checkpointed periodically by the SPS and backed up to upstream VMs. The SPS identifies individual operator bottlenecks and automatically scales them out by allocating new VMs and partitioning the checkpointed state. At any point, failed operators are recovered by restoring checkpointed state on a new VM and replaying unprocessed tuples. We evaluate this approach with the Linear Road Benchmark on the Amazon EC2 cloud platform and show that it can scale automatically to a load factor of L=350 with 50 VMs, while recovering quickly from failures.
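    As a rough illustration of what such state management primitives might look like, the sketch below lists a hypothetical interface that a stateful operator could expose to the SPS; the method names and signatures are assumptions made for illustration, not the system's actual API.

```python
from abc import ABC, abstractmethod
from typing import Any, List


class ManagedState(ABC):
    """Hypothetical set of state management primitives exposed by a
    stateful operator to the stream processing system (SPS)."""

    @abstractmethod
    def checkpoint(self) -> Any:
        """Take a consistent snapshot of the operator's state; the SPS
        calls this periodically and backs the snapshot up to upstream VMs."""

    @abstractmethod
    def restore(self, snapshot: Any) -> None:
        """Load a snapshot on a new VM after a failure; the SPS then
        replays the unprocessed tuples buffered upstream."""

    @abstractmethod
    def partition(self, n: int) -> List[Any]:
        """Split the checkpointed state into n pieces when the SPS
        scales a bottleneck operator out onto n new VMs."""
```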

    On Context-Aware Publish-Subscribe

    Complex communication patterns often need to take into account the characteristics of the environment, or the situation, in which the information to be communicated is produced or consumed. Publish-subscribe, and particularly its content-based incarnation, is often used to convey this information by encoding the “context” of the publisher into the published messages, taking advantage of the expressiveness of content-based addressing to encode context-aware communication patterns. In this paper we claim that this approach is both inadequate and inefficient and propose a context-aware publish-subscribe model of communication as a better alternative. In particular, we describe the API of a new publish-subscribe model that is both content- and context-based, and we explore possible routing schemes to implement this new model in a distributed publish-subscribe system, potentially improving on traditional content-based routing.
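    A minimal sketch of the idea, assuming a toy broker in which the publisher's context travels alongside the message instead of being encoded into it; all names below are illustrative and are not the API proposed in the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Subscription:
    content_filter: Callable[[dict], bool]   # predicate over the message content
    context_filter: Callable[[dict], bool]   # predicate over the publisher's context


class ContextAwareBroker:
    """Toy broker that matches on content and on publisher context separately,
    rather than folding the context into every published message."""

    def __init__(self) -> None:
        self.subscriptions: List[Subscription] = []

    def subscribe(self, content_filter: Callable[[dict], bool],
                  context_filter: Callable[[dict], bool]) -> None:
        self.subscriptions.append(Subscription(content_filter, context_filter))

    def publish(self, message: dict, publisher_context: dict) -> List[dict]:
        # The context is passed alongside the message, so routing can use it
        # independently of the message content.
        return [message for s in self.subscriptions
                if s.content_filter(message) and s.context_filter(publisher_context)]
```

    For example, a subscriber could ask for temperature readings above a threshold (content) published only by sensors currently located outdoors (publisher context), without the publisher having to embed its location in every message.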

    Estimating causal networks in biosphere–atmosphere interaction with the PCMCI approach

    Local meteorological conditions and biospheric activity are tightly coupled. Understanding these links is an essential prerequisite for predicting the Earth system under climate change conditions. However, many empirical studies on the interaction between the biosphere and the atmosphere are based on correlative approaches that are not able to deduce causal paths, and only very few studies apply causal discovery methods. Here, we use a recently proposed causal graph discovery algorithm, which aims to reconstruct the causal dependency structure underlying a set of time series. We explore the potential of this method to infer temporal dependencies in biosphere-atmosphere interactions. Specifically, we address the following questions: How do periodicity and heteroscedasticity influence causal detection rates, i.e. the detection of existing and non-existing links? How consistent are the results for noise-contaminated data? Do the results exhibit an increased information content that justifies the use of this causal-inference method? We explore the first question using artificial time series with well-known dependencies that mimic real-world biosphere-atmosphere interactions. The two remaining questions are addressed jointly in two case studies utilizing observational data. Firstly, we analyse three replicated eddy covariance datasets from a Mediterranean ecosystem at half-hourly time resolution, allowing us to understand the impact of measurement uncertainties. Secondly, we analyse global NDVI time series (GIMMS 3g) along with gridded climate data to study large-scale climatic drivers of vegetation greenness. Overall, the results confirm the capacity of the causal discovery method to extract time-lagged linear dependencies under realistic settings. Violations of the method's assumptions increase the likelihood of detecting false links. Nevertheless, we consistently identify interaction patterns in observational data. Our findings suggest that estimating a directed biosphere-atmosphere network at the ecosystem level can offer novel possibilities to unravel complex multi-directional interactions. Unlike classical correlative approaches, our findings are constrained to a few meaningful sets of relations, which can provide powerful insights for the evaluation of terrestrial ecosystem models.
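    The PCMCI algorithm referred to here is implemented in the tigramite package; a minimal sketch of a typical run on synthetic data might look like the following (the import path of the independence tests differs across tigramite versions, and the variable names and coefficients are made up to mimic lagged biosphere-atmosphere dependencies).

```python
import numpy as np
from tigramite import data_processing as pp
from tigramite.pcmci import PCMCI
from tigramite.independence_tests import ParCorr  # path differs in newer tigramite versions

# Synthetic series standing in for e.g. radiation, temperature and a carbon flux.
rng = np.random.default_rng(0)
T = 2000
rad = rng.normal(size=T)
temp = np.zeros(T)
gpp = np.zeros(T)
for t in range(1, T):
    temp[t] = 0.6 * temp[t - 1] + 0.5 * rad[t - 1] + rng.normal(scale=0.5)
    gpp[t] = 0.4 * gpp[t - 1] + 0.5 * rad[t] + 0.3 * temp[t - 1] + rng.normal(scale=0.5)

data = np.column_stack([rad, temp, gpp])
dataframe = pp.DataFrame(data, var_names=["Rad", "Temp", "GPP"])

# PCMCI with linear partial correlation as the conditional-independence test.
pcmci = PCMCI(dataframe=dataframe, cond_ind_test=ParCorr())
results = pcmci.run_pcmci(tau_max=4, pc_alpha=0.05)

# val_matrix[i, j, tau] holds the strength of the lagged link X_i(t - tau) -> X_j(t),
# p_matrix the corresponding p-values.
print(results["val_matrix"].shape, results["p_matrix"].shape)
```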

    On the potential of Sentinel-2 for estimating Gross Primary Production


    Financial and Accounting Approaches in Lease Appraisal

    The determination of the residual debt of a lease agreement at a given date, when insolvency or continuous arrears occur (i.e. an early termination, before the maturity of the lease plan), is often regulated by the contract, which fixes penalties and some kind of impairment reimbursement. Both the lessor and the lessee are required to calculate separately the amount of the outstanding debt and the sum of the impairment reimbursement and of the penalties. In this paper, the authors propose a model for a precise quantification of the residual debt, the damage impairment and the penalty shares based on three rates: the contractual rate, the implicit internal rate(s) of return and the market prime rate. This model is consistent with both the finance and the accounting perspectives. The developed methodology can also be shown to be capable of detecting any usury behavior early, when a threshold is given by the law or when it can be inferred from the market, therefore improving decision making and the forecasting of the actual costs of the agreement.
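    The abstract does not reproduce the model itself; as a simplified illustration of the kind of computation involved, the sketch below takes the outstanding debt at early termination to be the present value of the remaining instalments plus the final purchase option, discounted at the contractual rate, with a made-up penalty rule. All figures and names are hypothetical.

```python
def outstanding_debt(instalment: float, periods_left: int,
                     residual_value: float, period_rate: float) -> float:
    """Present value of the remaining lease instalments plus the final
    purchase option, discounted at the chosen per-period rate."""
    pv_instalments = sum(instalment / (1 + period_rate) ** k
                         for k in range(1, periods_left + 1))
    pv_residual = residual_value / (1 + period_rate) ** periods_left
    return pv_instalments + pv_residual


# Hypothetical figures: 36 monthly instalments of 1,000 still due,
# a 5,000 purchase option, contractual rate of 6% per year.
debt = outstanding_debt(instalment=1_000, periods_left=36,
                        residual_value=5_000, period_rate=0.06 / 12)
penalty = 0.02 * debt  # e.g. a 2% contractual penalty on the residual debt
print(round(debt, 2), round(penalty, 2))
```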

    Scalable and Fault-tolerant Stateful Stream Processing.

    As users of "big data" applications expect fresh results, we witness a new breed of stream processing systems (SPS) that are designed to scale to large numbers of cloud-hosted machines. Such systems face new challenges: (i) to benefit from the "pay-as-you-go" model of cloud computing, they must scale out on demand, acquiring additional virtual machines (VMs) and parallelising operators when the workload increases; (ii) failures are common with deployments on hundreds of VMs - systems must be fault-tolerant with fast recovery times, yet low per-machine overheads. An open question is how to achieve these two goals when stream queries include stateful operators, which must be scaled out and recovered without affecting query results. Our key idea is to expose internal operator state explicitly to the SPS through a set of state management primitives. Based on them, we describe an integrated approach for dynamic scale out and recovery of stateful operators. Externalised operator state is checkpointed periodically by the SPS and backed up to upstream VMs. The SPS identifies individual operator bottlenecks and automatically scales them out by allocating new VMs and partitioning the checkpointed state. At any point, failed operators are recovered by restoring checkpointed state on a new VM and replaying unprocessed tuples. We evaluate this approach with the Linear Road Benchmark on the Amazon EC2 cloud platform and show that it can scale automatically to a load factor of L=350 with 50 VMs, while recovering quickly from failures.
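    A minimal sketch of the recovery path described above, assuming the upstream VM holds the operator's latest checkpoint together with the tuples sent since that checkpoint; the class and function names are illustrative, not the system's actual interfaces.

```python
from typing import Callable, Dict, List


class UpstreamBuffer:
    """Held on the upstream VM: the downstream operator's last checkpoint
    plus every tuple sent since that checkpoint was taken."""

    def __init__(self) -> None:
        self.checkpoint: Dict[str, int] = {}
        self.unacked: List[dict] = []

    def store_checkpoint(self, snapshot: Dict[str, int]) -> None:
        # A new checkpoint supersedes the old one; tuples already reflected
        # in it no longer need to be replayed.
        self.checkpoint = dict(snapshot)
        self.unacked.clear()

    def record(self, tup: dict) -> None:
        self.unacked.append(tup)


def recover(buffer: UpstreamBuffer,
            apply_tuple: Callable[[Dict[str, int], dict], None]) -> Dict[str, int]:
    """Recovery on a fresh VM: restore the checkpoint, then replay the
    tuples that were not yet reflected in it."""
    state = dict(buffer.checkpoint)
    for tup in buffer.unacked:
        apply_tuple(state, tup)
    return state
```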

    Making State Explicit for Imperative Big Data Processing

    Data scientists often implement machine learning algorithms in imperative languages such as Java, Matlab and R. Yet such implementations fail to achieve the performance and scalability of specialised data-parallel processing frameworks. Our goal is to execute imperative Java programs in a data-parallel fashion with high throughput and low latency. This raises two challenges: how to support the arbitrary mutable state of Java programs without compromising scalability, and how to recover that state after failure with low overhead. Our idea is to infer the dataflow and the types of state accesses from a Java program and use this information to generate a stateful dataflow graph (SDG). By explicitly separating data from mutable state, SDGs have specific features to enable this translation: to ensure scalability, distributed state can be partitioned across nodes if computation can occur entirely in parallel; if this is not possible, partial state gives nodes local instances for independent computation, which are reconciled according to application semantics. For fault tolerance, large in-memory state is checkpointed asynchronously without global coordination. We show that the performance of SDGs for several imperative online applications matches that of existing data-parallel processing frameworks.
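    A minimal sketch of the distinction between partitioned and partial state, under the assumption that partitioned state is split by key across nodes while partial state keeps one local instance per node that is later reconciled by an application-supplied merge; the function names are illustrative and do not correspond to the actual SDG annotations or API.

```python
from collections import Counter
from typing import Dict, List


def split_partitioned(state: Dict[str, int], n: int) -> List[Dict[str, int]]:
    """Partitioned state: disjoint key ranges per node, so computation
    can proceed entirely in parallel."""
    parts: List[Dict[str, int]] = [{} for _ in range(n)]
    for key, value in state.items():
        parts[hash(key) % n][key] = value
    return parts


def merge_partial(local_states: List[Counter]) -> Counter:
    """Partial state: each node keeps its own instance; results are reconciled
    according to application semantics (here, summing the per-node counts)."""
    merged: Counter = Counter()
    for local in local_states:
        merged.update(local)
    return merged
```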